9 research outputs found

    Wireless sensor data processing for on-site emergency response

    This thesis is concerned with the problem of processing data from Wireless Sensor Networks (WSNs) to meet the requirements of emergency responders (e.g. Fire and Rescue Services). A WSN typically consists of spatially distributed sensor nodes that cooperatively monitor physical or environmental conditions. Sensor data about these conditions can then be used as part of the input to predict, detect, and monitor emergencies. Although WSNs have demonstrated great potential in facilitating Emergency Response, sensor data cannot be interpreted directly due to its large volume, noise, and redundancy. In addition, emergency responders are not interested in raw data but in the meaning it conveys. This thesis presents research on processing and combining data from multiple types of sensors, and combining sensor data with other relevant data, for the purpose of obtaining data of greater quality and information of greater relevance to emergency responders. The current theory and practice in Emergency Response and the existing technology aids were reviewed to identify the requirements from both application and technology perspectives (Chapter 2). The detailed process of information extraction from sensor data and sensor data fusion techniques were reviewed to identify what constitutes suitable sensor data fusion techniques and the challenges involved in sensor data processing (Chapter 3). A study of Incident Commanders’ requirements utilised a goal-driven task analysis method to identify gaps in current means of obtaining relevant information during response to fire emergencies and a list of opportunities for WSN technology to fill those gaps (Chapter 4). A high-level Emergency Information Management System Architecture was proposed, including the main components that are needed, the interaction between components, and system function specification at different incident stages (Chapter 5). A set of state-awareness rules was proposed and integrated with a Kalman filter to improve filtering performance; the proposed data pre-processing approach achieved both improved outlier removal and quick detection of real events (Chapter 6). A data storage mechanism was proposed to support timely response to queries regardless of increases in data volume (Chapter 7). What can be considered “meaning” (e.g. events) for emergency responders was identified, and a generic emergency event detection model was proposed to identify patterns present in sensor data and associate those patterns with events (Chapter 8). In conclusion, the added benefits that the technical work can provide to current Emergency Response practice are discussed, and specific contributions and future work are highlighted (Chapter 9).
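    The Chapter 6 pre-processing approach combines a Kalman filter with state-awareness rules so that isolated outliers are suppressed while genuine events are still detected quickly. The sketch below illustrates that general idea only; the random-walk process model, the gate threshold, and the confirm_window parameter are illustrative assumptions, not the thesis's actual rules.

```python
# Minimal sketch (illustrative, not the thesis's implementation): a 1-D Kalman
# filter with a "state-awareness" gate. An isolated reading that contradicts
# the predicted state is treated as an outlier; a sustained deviation is
# accepted as a real event and the filter re-initialises on it.
import math

def filter_with_state_awareness(readings, q=0.01, r=1.0, gate=3.0, confirm_window=3):
    x, p = readings[0], 1.0          # initial state estimate and variance
    suspect = 0                      # count of consecutive gated readings
    out = []
    for z in readings:
        p = p + q                    # predict step (random-walk process model)
        innovation = z - x
        s = p + r                    # innovation variance
        if abs(innovation) > gate * math.sqrt(s):
            suspect += 1
            if suspect < confirm_window:
                out.append(x)        # likely outlier: keep the prediction
                continue
            x, p = z, 1.0            # sustained deviation: accept as a real event
            suspect = 0
        else:
            suspect = 0
            k = p / s                # Kalman gain
            x = x + k * innovation
            p = (1 - k) * p
        out.append(x)
    return out

# A spike at index 5 is suppressed; the step change starting at index 10 is kept.
data = [20.0] * 5 + [80.0] + [20.0] * 4 + [60.0] * 5
print([round(v, 1) for v in filter_with_state_awareness(data)])
```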

    DataSheet_1_Genome-wide identification of foxtail millet’s TRX family and a functional analysis of SiNRX1 in response to drought and salt stresses in transgenic Arabidopsis.docx

    Thioredoxins (TRXs) are small-molecule proteins with redox activity that play very important roles in the growth, development, and stress resistance of plants. Foxtail millet (Setaria italica) has gradually become a model crop for stress-resistance research because of advantages such as its resistance to sterility and its small genome. To date, the thioredoxin (TRX) family has been identified in Arabidopsis thaliana, rice, and wheat. However, studies of the TRX family in foxtail millet have not been reported, and the biological function of this family remains unclear. In this study, 35 SiTRX genes were identified in the whole genome of foxtail millet through bioinformatic analysis. According to phylogenetic analysis, the 35 SiTRXs can be divided into 13 types. The chromosome distribution, gene structure, cis-elements and conserved protein motifs of the 35 SiTRXs were characterized. Three nucleoredoxin (NRX) members were further identified by a structural analysis of TRX family members. The expression patterns of foxtail millet’s SiNRX members under abiotic stresses showed that they have different stress-response patterns. In addition, subcellular localization revealed that SiNRXs were localized to the nucleus, cytoplasm and membrane. Further studies demonstrated that the overexpression of SiNRX1 enhanced Arabidopsis’ tolerance to drought and salt stresses, resulting in a higher survival rate and better growth performance. Moreover, the expression levels of several known stress-related genes were generally higher in the overexpression lines than in the wild-type. Thus, this study provides a general picture of the TRX family in foxtail millet and lays a foundation for further research on the mechanism of action of TRX proteins under abiotic stresses.

    Demographic characteristics and the prevalence of corneal blindness.

    a. Comparisons between males and females. Blindness in both eyes, χ² = 2.18, p = 0.14; blindness in one eye, χ² = 3.48, p = 0.06; blindness in at least one eye (total), χ² = 5.25, p = 0.02 (statistically significant difference).
    b. Comparisons between different ages. Blindness in both eyes, χ² = 131.39, p < 0.001 (statistically significant difference); blindness in one eye, χ² = 608.80, p < 0.001 (statistically significant difference); blindness in at least one eye (total), χ² = 739.70, p < 0.001 (statistically significant difference).
    c. Comparisons between different education levels. Blindness in both eyes, χ² = 100.87, p < 0.001 (statistically significant difference); blindness in one eye, χ² = 321.00, p < 0.001 (statistically significant difference); blindness in at least one eye (total), χ² = 416.79, p ≤ 0.001 (statistically significant difference).
    d. Comparisons between rural and urban areas. Blindness in both eyes, χ² = 12.60, p < 0.001 (statistically significant difference); blindness in one eye, χ² = 48.43, p < 0.001 (statistically significant difference); blindness in at least one eye (total), χ² = 60.74, p < 0.001 (statistically significant difference).
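    The χ² values above come from standard tests of independence on contingency tables of blindness counts by demographic group. A minimal sketch of such a test is shown below; the counts are hypothetical placeholders, not the study's data, and only the test procedure is illustrated.

```python
# Minimal sketch of the kind of chi-square comparison reported in the notes
# above. The 2x2 table below contains hypothetical placeholder counts, not the
# study's data.
from scipy.stats import chi2_contingency

# Rows: male / female; columns: corneal blindness in at least one eye (yes / no)
table = [[52, 4948],
         [78, 5922]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```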

    Table_2_Evaluation of a computer-aided diagnostic model for corneal diseases by analyzing in vivo confocal microscopy images.docx

    Objective: In order to automatically and rapidly recognize the layers of corneal images obtained with in vivo confocal microscopy (IVCM) and classify them into normal and abnormal images, a computer-aided diagnostic model was developed and tested based on deep learning to reduce physicians’ workload.
    Methods: A total of 19,612 corneal images were retrospectively collected from 423 patients who underwent IVCM between January 2021 and August 2022 at Renmin Hospital of Wuhan University (Wuhan, China) and Zhongnan Hospital of Wuhan University (Wuhan, China). Images were then reviewed and categorized by three corneal specialists before training and testing the models, including the layer recognition model (epithelium, Bowman’s membrane, stroma, and endothelium) and the diagnostic model, to identify the layers of corneal images and distinguish normal images from abnormal images. In total, 580 database-independent IVCM images were used in a human-machine competition to assess the speed and accuracy of image recognition by 4 ophthalmologists and artificial intelligence (AI). To evaluate the efficacy of the model, 8 trainees were asked to recognize these 580 images both with and without model assistance, and the results of the two evaluations were analyzed to explore the effects of model assistance.
    Results: The accuracy of the model reached 0.914, 0.957, 0.967, and 0.950 for the recognition of the epithelium, Bowman’s membrane, stroma, and endothelium layers in the internal test dataset, respectively, and 0.961, 0.932, 0.945, and 0.959 for the recognition of normal/abnormal images at each layer, respectively. In the external test dataset, the accuracy of corneal layer recognition was 0.960, 0.965, 0.966, and 0.964, respectively, and the accuracy of normal/abnormal image recognition was 0.983, 0.972, 0.940, and 0.982, respectively. In the human-machine competition, the model achieved an accuracy of 0.929, which was similar to that of specialists and higher than that of senior physicians, and its recognition speed was 237 times faster than that of specialists. With model assistance, the accuracy of trainees increased from 0.712 to 0.886.
    Conclusion: A computer-aided diagnostic model based on deep learning was developed for IVCM images; it rapidly recognizes the layers of corneal images and classifies them as normal or abnormal. This model can increase the efficiency of clinical diagnosis and assist physicians in training and learning for clinical purposes.
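    The abstract does not specify the network architecture, so the sketch below only illustrates one plausible setup for the layer-recognition task: a transfer-learning classifier over the four corneal layers. The ResNet-18 backbone, the 224x224 input size, and the class names are assumptions for illustration, not the authors' model.

```python
# Minimal sketch of a transfer-learning image classifier for the four corneal
# layers; the backbone, input size, and class names are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

LAYER_CLASSES = ["epithelium", "Bowman's membrane", "stroma", "endothelium"]

def build_layer_recognition_model(num_classes: int = len(LAYER_CLASSES)) -> nn.Module:
    # Pretrained backbone with the final fully connected layer replaced
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# IVCM frames are grayscale; replicate to 3 channels for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = build_layer_recognition_model()
model.eval()
with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)   # stand-in for one preprocessed image
    probs = torch.softmax(model(dummy), dim=1)
    print(LAYER_CLASSES[int(probs.argmax())])
```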

    Table_1_Evaluation of a computer-aided diagnostic model for corneal diseases by analyzing in vivo confocal microscopy images.docx
